AI's Memorization Crisis

The Atlantic - Technology

Large language models don't "learn"--they copy. And that could change everything for the tech industry. On Tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models--OpenAI's GPT, Anthropic's Claude, Google's Gemini, and xAI's Grok--have stored large portions of some of the books they've been trained on, and can reproduce long excerpts from those books. In fact, when prompted strategically by researchers, Claude delivered the near-complete text of several books, in addition to thousands of words from others.


AI firm wins high court ruling after photo agency's copyright claim

The Guardian

There was evidence that Getty's images were used to train Stability AI's model, which allows users to generate images with text prompts. Stability was also found to have infringed Getty's trademarks in some cases. The judge, Mrs Justice Joanna Smith, said the question of where to strike the balance between the interests of the creative industries on one side and the AI industry on the other was "of very real societal importance".


AI Isn't Coming for Hollywood. It Has Already Arrived

WIRED

Lady Gaga probably wasn't thinking that a coup would unfold in her greenhouse. Then again, she was cohosting a party there with Sean Parker, the billionaire founder of Napster and first president of Facebook. It was February 2024, and the singer had invited guests to her $22.5 million oceanside estate in Malibu to mark the launch of a skin-care nonprofit. One of the organization's trustees was her boyfriend, whose day job was running the Parker Foundation. In the candlelit space, beside floor-to-ceiling windows that looked out over the Pacific, Parker's people mingled with Gaga's, nibbling focaccia and branzino alla brace to music from a string quartet (Grammy-winning, of course).


London AI firm says Getty copyright case poses 'overt threat' to industry

The Guardian

Stability allows users to generate images using text prompts, and its directors include James Cameron, the Oscar-winning film director of Avatar and Titanic. But Getty called the people who were training the AI system "a bunch of tech geeks" and claimed they were indifferent to the problems their innovation might create. Stability countered by alleging that Getty was using "fanciful" legal routes and spending approximately £10m to fight a technology it feared was "an existential threat" to its business. Getty alleges that Stability was "completely indifferent to what they fed into the training data"; as a result, it says, the program, called Stable Diffusion, outputs images with Getty Images watermarks still on them.


Adoption of Watermarking for Generative AI Systems in Practice and Implications under the new EU AI Act

Rijsbosch, Bram, van Dijck, Gijs, Kollnig, Konrad

arXiv.org Artificial Intelligence

AI-generated images have become so good in recent years that individuals can no longer distinguish them from "real" images. This development creates a series of societal risks, and challenges our perception of what is true and what is not, particularly with the emergence of "deep fakes" that impersonate real individuals. Watermarking, a technique that involves embedding identifying information within images to indicate their AI-generated nature, has emerged as a primary mechanism to address the risks posed by AI-generated images. The implementation of watermarking techniques is now becoming a legal requirement in many jurisdictions, including under the new 2024 EU AI Act. Despite the widespread use of AI image generation systems, the current status of watermarking implementation remains largely unexamined. Moreover, the practical implications of the AI Act's watermarking requirements have not previously been studied. The present paper therefore both provides an empirical analysis of 50 of the most widely used AI systems for image generation, and embeds this empirical analysis into a legal analysis of the AI Act. We identify four categories of generative AI image systems relevant under the AI Act, outline the legal obligations for each category, and find that only a minority of providers currently implement adequate watermarking practices.
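To make the idea of "embedding identifying information within images" concrete, here is a minimal sketch of least-significant-bit (LSB) watermarking, one of the simplest possible schemes. It is illustrative only: the function names and the toy pixel data are invented for this example, and real generative-AI systems of the kind the paper surveys typically use more robust frequency-domain or model-level watermarks rather than plain LSB, which does not survive compression or editing.

```python
def embed_watermark(pixels, message_bits):
    """Hide each bit of message_bits in the LSB of successive pixel values."""
    if len(message_bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return out

def extract_watermark(pixels, n_bits):
    """Read back the first n_bits least-significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# Example: tag a tiny 8-pixel grayscale "image" with the marker 1010.
image = [200, 13, 77, 240, 5, 90, 128, 64]
marked = embed_watermark(image, [1, 0, 1, 0])
recovered = extract_watermark(marked, 4)
# Each pixel value changes by at most 1, so the mark is imperceptible,
# while a detector that knows where to look can recover it exactly.
```

The same trade-off the sketch exposes (imperceptibility versus robustness) is what makes adequate watermarking non-trivial for the providers the paper examines.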


'I'm going to sue the living pants off them': AI's big legal showdown – and what it means for Dr Strange's hair

The Guardian

The first piece of AI-generated video I ever made moved me to tears – tears of laughter. Given the chance to fool around with Runway AI's Gen-3 Alpha, I dropped in an image of an eagle carrying off a wolf. Moments later, the picture sprang into life. Except the bird only had one leg – and its plummeting prey sprouted wings from its tail and morphed into a wolf-headed goose. It was weird and hilarious.


Stable Diffusion 3.5 follows your prompts more closely and generates more diverse people

Engadget

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it's tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly. The new model comes in three flavors.


Stability AI's audio generator can now crank out 3 minute 'songs'

Engadget

Stability AI just unveiled Stable Audio 2.0, an upgraded version of its music-generation platform. This system lets users create up to three minutes of audio via text prompt. Just imagine the fake birthday song you could make in the style of that one Rob Thomas/Santana track. The tool is free and publicly available through the company's website, so have at it. Introducing Stable Audio 2.0 – a new model capable of producing high-quality, full tracks with coherent musical structure up to three minutes long at 44.1 kHz stereo from a single prompt.


Here's Proof You Can Train an AI Model Without Slurping Copyrighted Content

WIRED

A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. "There's no fundamental reason why someone couldn't train an LLM fairly," says Ed Newton-Rex, CEO of Fairly Trained. He founded the nonprofit in January 2024 after quitting his executive role at image generation startup Stability AI because he disagreed with its policy of scraping content without permission. Fairly Trained offers a certification to companies willing to prove that they've trained their AI models on data that they either own, have licensed, or is in the public domain. When the nonprofit launched, some critics pointed out that it hadn't yet identified a large language model that met those requirements.


Stable Diffusion 3 is a new AI image generator that won't mess up text in pictures, its makers claim

Engadget

Stability AI, the startup behind Stable Diffusion, the tool that uses generative AI to create images from text prompts, revealed Stable Diffusion 3, a next-generation model, on Thursday. Stability AI claimed that the new model, which isn't widely available yet, improves image quality, works better with prompts containing multiple subjects, and can render text more accurately as part of the generated image, something that previous Stable Diffusion models weren't great at. Stability AI CEO Emad Mostaque posted some examples of this on X. The announcement comes days after Stability AI's largest rival, OpenAI, unveiled Sora, a brand new AI model capable of generating nearly realistic, high-definition videos from simple text prompts. Sora, which isn't available to the general public yet either, sparked concerns about its potential to create realistic-looking fake footage.